CS 269 : Machine Learning Theory Lecture 4 : Infinite Function Classes

Authors

  • Jennifer Wortman Vaughan
  • Luca Valente
  • Palash Agrawal
Abstract

Before stating Hoeffding’s Inequality, we recall two intermediate results that we will use in order to prove it. One is Markov’s Inequality and the other is Hoeffding’s Lemma. (Note that in class we did not cover Hoeffding’s Lemma, and only gave a brief outline of the Chernoff bounding technique and how it is used to prove Hoeffding’s Inequality. Here we give a full proof of Hoeffding’s Inequality for completeness.)

Theorem 2 (Markov’s Inequality). Let X be a non-negative random variable. For any K > 0,

P(X ≥ K) ≤ E[X] / K.
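As a quick sanity check of Markov's bound, one can compare the tail probability P(X ≥ K) against E[X]/K on simulated data. This is a sketch of ours, not code from the lecture; the choice of an exponential distribution with mean 1 is illustrative.

```python
import random

# Numerical illustration of Markov's Inequality: for a non-negative random
# variable X and any K > 0, P(X >= K) <= E[X] / K. Here X is exponential
# with mean 1 (an illustrative choice, not from the lecture).

random.seed(0)
n = 100_000
samples = [random.expovariate(1.0) for _ in range(n)]
mean = sum(samples) / n

for K in (0.5, 1.0, 2.0, 4.0):
    frac = sum(1 for x in samples if x >= K) / n
    bound = mean / K
    # Markov's bound also holds for the empirical distribution, since
    # x >= K * 1[x >= K] pointwise for non-negative x; so the assertion
    # below is guaranteed, not just likely.
    assert frac <= bound
    print(f"K={K}: P(X >= K) ~ {frac:.4f} <= E[X]/K ~ {bound:.4f}")
```

Note that the bound is loose for small K (it can exceed 1), which is why Chernoff-style arguments apply Markov's Inequality to e^{tX} rather than to X directly.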

Related resources

CS 269 : Machine Learning Theory Lecture 14 : Generalization Error of Adaboost

In this lecture we will continue our discussion of the Adaboost algorithm and derive a bound on the generalization error. We saw last time that the training error decreases exponentially with respect to the number of rounds T . However, we also want to see the performance of this algorithm on new test data. Today we will show why the Adaboost algorithm generalizes so well and why it avoids over...
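The claim that AdaBoost's training error falls rapidly with the number of rounds T can be seen on a toy example. The sketch below is ours (not the lecture's code): decision stumps as weak learners on a 1-D data set whose positive class is a middle band, so no single stump suffices and several boosted rounds are needed.

```python
import math
import random

# Illustrative AdaBoost sketch: threshold stumps on toy 1-D data, tracking
# the training error of the combined hypothesis as rounds T grow.

random.seed(1)
X = [random.uniform(-1, 1) for _ in range(200)]
# Label +1 inside a middle band; a single stump cannot separate this.
y = [1 if -0.3 < x < 0.4 else -1 for x in X]
n = len(X)

def stump_predict(x, t, s):
    # Threshold stump: predict s on x > t, and -s otherwise.
    return s if x > t else -s

def best_stump(w):
    # Exhaustively pick the (threshold, polarity) of minimum weighted error.
    best = None
    for t in X:
        for s in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(xi, t, s) != yi)
            if best is None or err < best[0]:
                best = (err, t, s)
    return best

w = [1.0 / n] * n   # uniform initial distribution over examples
H = []              # ensemble: (alpha, threshold, polarity) triples
errors = []
for T in range(1, 11):
    eps, t, s = best_stump(w)
    alpha = 0.5 * math.log((1 - eps) / max(eps, 1e-12))
    H.append((alpha, t, s))
    # Reweight: misclassified examples gain weight, then renormalize.
    w = [wi * math.exp(-alpha * yi * stump_predict(xi, t, s))
         for xi, yi, wi in zip(X, y, w)]
    Z = sum(w)
    w = [wi / Z for wi in w]
    # Training error of the weighted-majority combined hypothesis.
    train_err = sum(
        1 for xi, yi in zip(X, y)
        if (1 if sum(a * stump_predict(xi, tj, sj) for a, tj, sj in H) > 0
            else -1) != yi) / n
    errors.append(train_err)
    print(f"T={T:2d}  training error = {train_err:.3f}")
```

On this data the combined hypothesis typically reaches zero training error within a few rounds, consistent with the exponential decrease in T mentioned above.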


CS 269 : Machine Learning Theory Lecture 16 : SVMs and Kernels

We previously showed that the solution to the primal problem is equivalent to the solution to the dual problem if they satisfy the following primal-dual equivalence conditions. First, we need a convex objective function and in our case, it is (1/2)‖w‖². Second, we need convex inequality constraints g_i, which are 1 − y_i(w · x_i + b) for i = 1, ..., m. The last condition states that for each in...
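The conditions summarized above are the KKT conditions for the hard-margin SVM; a standard way to write them out, in the snippet's notation (a sketch, not necessarily the exact statement used in the lecture), is:

```latex
% KKT conditions for the hard-margin SVM (standard form; a sketch,
% not necessarily the lecture's exact statement).
\begin{align*}
\min_{w,\,b}\quad & \tfrac{1}{2}\|w\|^2
  && \text{(convex objective)}\\
\text{s.t.}\quad & g_i(w, b) = 1 - y_i(w \cdot x_i + b) \le 0,
  \quad i = 1, \dots, m && \text{(convex constraints)}\\
\intertext{At an optimum, with multipliers $\alpha_i \ge 0$:}
w &= \sum_{i=1}^{m} \alpha_i y_i x_i,
  \qquad \sum_{i=1}^{m} \alpha_i y_i = 0 && \text{(stationarity)}\\
\alpha_i\, g_i(w, b) &= 0, \quad i = 1, \dots, m
  && \text{(complementary slackness)}
\end{align*}
```

Complementary slackness is what makes only the support vectors (points with g_i = 0) carry nonzero multipliers α_i.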


Cs 880: Advanced Complexity Theory Lecture 7: Passive Learning 1 Computational Learning Theory

In the previous lectures, we studied harmonic analysis as a tool for analyzing structural properties of Boolean functions, and we developed some property testers as applications of this tool. In the next few lectures, we study applications in computational learning theory. Harmonic analysis turns out to be useful for designing and analyzing efficient learning algorithms under the uniform distri...


CS 880 : Advanced Complexity Theory 2 / 11 / 2008 Lecture 8 : Active Learning

Last time we studied computational learning theory and saw how harmonic analysis could be used to design and analyze efficient learning algorithms with respect to the uniform distribution. We developed a generic passive learning algorithm for concepts whose Fourier spectrum is concentrated on a known set, and applied it to decision trees. We also started developing an approach for the case wher...


CS 269 : Machine Learning Theory

The idea behind boosting is to construct an accurate hypothesis from a set of hypotheses that are each guaranteed to perform slightly better than random guessing on particular distributions of data. This idea originally arose from attempts to prove the robustness of the PAC model. By robustness, we mean the notion that slight alterations to the model definitions should not result in dramaticall...




Publication date: 2010